Facebook’s Content Moderation Failures in Ethiopia
from Net Politics and Digital and Cyberspace Policy Program


Amhara militia members ride in the back of a truck towards a fight with the Tigray People's Liberation Front. Reuters/Tiksa Negeri

Facebook has failed to moderate content in underserved countries. Facebook and other social media companies must invest more in local content moderation, instead of relying on global AI systems.

April 19, 2022 2:36 pm (EST)

Blog posts represent the views of CFR fellows and staff and not those of CFR, which takes no institutional positions.

Disinformation and hate speech posted on social media amplify and escalate social tensions, and can lead to real-world violence. The problem is particularly pronounced in small, developing, and already fragile states like Ethiopia and Burma, where Facebook propaganda has been linked to the genocide of the Rohingya people.

Facebook knows that this is a problem, yet it has barely adjusted its content moderation strategies in smaller countries struggling with conflict and ethnic divisions. The 2021 “Facebook Files” leak demonstrates this by documenting Facebook’s repeated content moderation failures in Ethiopia.


Social media has served as a lightning rod for ethnic conflict in Ethiopia, especially as the civil war has escalated. Some language posted on Facebook has verged on incitement to genocide. Dejene Assefa, an activist with over 120,000 followers, called for patriots to take up arms against the Tigrayan ethnic group in October 2021, writing, “The war is with those you grew up with, your neighbor. If you can rid your forest of these thorns… victory will be yours.” His post was shared over nine hundred times before it was reported and taken down. Assefa’s words can still be found in posts across Facebook.

Calls to violence on social media during the Ethiopian civil war have repeatedly translated into real-world violence. The first major flashpoint came with the 2020 assassination of Hachalu Hundessa, a prominent singer who advocated for better treatment of the Oromo ethnic group. The riots after his murder were “supercharged by the almost-instant and widespread sharing of hate speech and incitement to violence on Facebook” and left at least 150 Ethiopians dead.

The riots following Hachalu Hundessa’s assassination pushed Facebook to translate its community standards into Amharic and Oromo for the first time. However, Facebook designated Ethiopia as an “at risk country” only in 2021, after the civil war had already begun. According to Facebook, at risk countries are characterized by escalating social divisions and the threat that discourse on Facebook will spill over into violence. The three countries marked most at risk (the United States, India, and Brazil) each have “war rooms” where Facebook teams constantly monitor network activity. Yet not all at risk countries are so well resourced.

The failures of Facebook in Ethiopia are a symptom of a deep geographic and linguistic inequality in the resources devoted to content moderation. Facebook supports posts in 110 languages, but it only has the capacity to review content in 70 of them. According to Facebook whistleblower Frances Haugen, Facebook adds a new language to its content moderation program “usually under crisis conditions,” and it can take up to a year to develop even minimal moderation systems. As of late 2021, Facebook still lacked misinformation and hate speech classifiers for Ethiopia, critical resources deployed in other at risk countries. The company partnered with two content moderation organizations in Ethiopia, PesaCheck and AFP Fact Check, which together have only five employees devoted to scanning content posted by Ethiopia’s seven million Facebook users.

The five Ethiopians working as content moderators handle only a small percentage of the posts flagged as problematic. To augment human moderation, Facebook uses an artificial intelligence (AI) “network-based moderation” system to filter the majority of content. Network-based moderation uses pattern recognition algorithms to flag posts based on previously identified objectionable content. The system was developed in part because it does not require a comprehensive understanding of the language of a post, so it can theoretically be deployed in contexts where Facebook lacks the language capacity to perform comprehensive human moderation. Internal communications leaked in the Facebook Files show that network-based moderation is still experimental. It is also opaque: there are few public details about how the company understands and models malicious patterns of behavior, or about how successful the approach has been so far. Despite these problems, Facebook employees pushed to apply network-based moderation to smaller at risk countries where the company had little language capacity, like Ethiopia.
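The leaked documents do not describe how the system works internally. The sketch below is a minimal, hypothetical illustration of what language-agnostic, network-based triage could look like: it scores new posts by their relationship to content and accounts that human reviewers have already flagged, never by reading the text. The class names, signal weights, and threshold are illustrative assumptions, not Facebook's implementation.

# A minimal, hypothetical sketch of language-agnostic "network-based" triage.
# Assumption: new posts are scored by their links to previously flagged content
# and accounts, not by understanding the post's language. All names, weights,
# and the threshold are illustrative, not drawn from Facebook's system.

from dataclasses import dataclass, field
from typing import List, Optional, Set


@dataclass
class Post:
    post_id: str
    author_id: str
    text: str
    shared_from: Optional[str] = None  # id of the post this one reshares, if any


@dataclass
class ModerationGraph:
    flagged_posts: Set[str] = field(default_factory=set)    # posts confirmed as violating
    flagged_authors: Set[str] = field(default_factory=set)  # authors of those posts

    def record_violation(self, post: Post) -> None:
        """Store a post that human moderators confirmed as objectionable."""
        self.flagged_posts.add(post.post_id)
        self.flagged_authors.add(post.author_id)

    def risk_score(self, post: Post) -> float:
        """Score a new post from network signals alone; no language model needed."""
        score = 0.0
        if post.shared_from in self.flagged_posts:
            score += 0.6  # reshare of known objectionable content
        if post.author_id in self.flagged_authors:
            score += 0.4  # author previously posted flagged content
        return score


def triage(posts: List[Post], graph: ModerationGraph, threshold: float = 0.5) -> List[Post]:
    """Return the posts that should be escalated to human moderators."""
    return [p for p in posts if graph.risk_score(p) >= threshold]

The design choice this sketch highlights is also its weakness in a country like Ethiopia: nothing in the loop ever reads the text, so everything depends on how complete the set of already-flagged content is, which is precisely what is missing where Facebook has few moderators and no classifiers for the local languages.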


If network-based AI moderation is to succeed in filtering out hate speech and disinformation, it must operate in tandem with a well-resourced team of content moderators who have linguistic and cultural expertise in a country. AI built without a clear understanding of the local language and culture it moderates cannot catch the subtleties of that language. Yet Facebook deployed its network-based moderation system believing that it could fully replace human moderation.

The lessons of the tragedy in Ethiopia are clear: Facebook must proactively identify at risk countries and build up content moderation systems there based on local employees' linguistic and cultural knowledge. This requires directing more resources towards content moderation in the developing world; hiring more moderators with expertise in regional languages; and partnering with local organizations to better assess the threat level and identify problematic content. Facebook cannot continue to rely on global systems to take the place of local human moderators. The human cost of continuing to under-resource smaller, fragile, non-English speaking countries is too high.

 

Caroline Allen is the Editor in Chief of the Brown Journal of World Affairs.

Creative Commons: Some rights reserved.
This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.